This paper presents a frequency-velocity convolutional neural network (CNN) for rapid, non-invasive 2D shear wave velocity (Vs) imaging of near-surface geomaterials. Operating in the frequency-velocity domain allows for significant flexibility in the linear-array, active-source experimental testing configurations used to generate the CNN inputs, which are normalized dispersion images. Unlike wavefield images, normalized dispersion images are relatively insensitive to the experimental testing configuration, accommodating various source types, source offsets, numbers of receivers, and receiver spacings. We demonstrate the effectiveness of the frequency-velocity CNN by applying it to a classic near-surface geophysics problem, namely imaging a two-layer, undulating, soil-over-bedrock interface. This problem was recently investigated through the development of a time-distance CNN, which showed significant promise but lacked flexibility in handling different field testing configurations. The new frequency-velocity CNN presented herein is shown to have comparable accuracy to the time-distance CNN while providing greater flexibility to handle varied field applications. The frequency-velocity CNN was trained, validated, and tested using 100,000 synthetic near-surface models. The ability of the proposed frequency-velocity CNN to generalize across various acquisition configurations was first tested using synthetic near-surface models, and it was then applied to experimental field data collected at the Hornsby Bend site in Austin, Texas, USA. When fully developed for a wider range of geological conditions, the proposed CNN may eventually be used as a rapid, end-to-end alternative to current pseudo-2D surface wave imaging techniques, or to develop starting models for full waveform inversion.
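As a hedged illustration of how a normalized dispersion image can be produced from a linear-array, active-source shot gather, the NumPy sketch below uses the classic phase-shift transform and normalizes each frequency line by its maximum; the authors' exact preprocessing may differ, and all names here are illustrative.

```python
import numpy as np

def normalized_dispersion_image(gather, offsets, dt, velocities, fmin=5.0, fmax=100.0):
    """Phase-shift transform of an active-source shot gather into a
    normalized frequency-velocity dispersion image.

    gather    : (n_receivers, n_samples) recorded waveforms
    offsets   : (n_receivers,) source-receiver offsets in meters
    velocities: (n_velocities,) trial phase velocities in m/s
    """
    spectra = np.fft.rfft(gather, axis=1)                 # (n_rec, n_freq)
    freqs = np.fft.rfftfreq(gather.shape[1], d=dt)
    band = (freqs >= fmin) & (freqs <= fmax)
    freqs, spectra = freqs[band], spectra[:, band]
    phase = spectra / (np.abs(spectra) + 1e-12)           # keep phase only
    power = np.empty((freqs.size, velocities.size))
    for j, v in enumerate(velocities):
        # Steer each trace by the trial phase velocity and stack coherently.
        steer = np.exp(1j * 2 * np.pi * np.outer(freqs, offsets) / v)
        power[:, j] = np.abs(np.sum(phase.T * steer, axis=1))
    # Normalize each frequency line to [0, 1]; this is what makes the image
    # relatively insensitive to the acquisition configuration.
    power /= power.max(axis=1, keepdims=True) + 1e-12
    return freqs, power
```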
In this study, radiomics approaches are extended to optical fluorescence molecular imaging data for tissue classification, termed "optomics". Fluorescence molecular imaging is emerging for precise surgical guidance during resection of head and neck squamous cell carcinoma (HNSCC). However, tumor-to-normal tissue contrast is confounded by intrinsic physiological limitations of heterogeneous expression of the target molecule, epidermal growth factor receptor (EGFR). Optomics seeks to improve tumor identification by probing textural pattern differences in EGFR expression conveyed by fluorescence. A total of 1,472 standardized optomic features were extracted from fluorescence image samples. A supervised machine learning pipeline involving a support vector machine classifier was trained with 25 top-ranked features selected by the minimum redundancy maximum relevance criterion. Model predictive performance was compared to a fluorescence intensity thresholding method by classifying image patches of resected tissue with histologically confirmed malignancy status. The optomics approach provided consistent prediction accuracy across all test set samples, irrespective of dose, compared to the fluorescence intensity thresholding method (mean accuracy of 89% vs. 81%; P = 0.0072). The improved performance demonstrates that extending radiomics to fluorescence molecular imaging data offers a promising image analysis technique for cancer detection in fluorescence-guided surgery.
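The described pipeline (feature extraction, mRMR selection of 25 features, SVM classification) maps naturally onto a standard scikit-learn workflow. The sketch below is an approximation rather than the authors' code: scikit-learn has no built-in mRMR, so univariate mutual-information ranking stands in for it, and the data here are random placeholders.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: X holds per-patch optomic features, y holds
# histology-confirmed malignancy labels (0 = benign, 1 = malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1472))
y = rng.integers(0, 2, size=200)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    # Stand-in for mRMR: keep the 25 features with the highest
    # mutual information with the malignancy label.
    ("select", SelectKBest(mutual_info_classif, k=25)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])
print(cross_val_score(pipeline, X, y, cv=5).mean())
```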
We present an open-source differentiable acoustic simulator, j-Wave, that can solve time-varying and time-harmonic acoustic problems. It supports automatic differentiation, a program transformation technique with many applications, especially in machine learning and scientific computing. j-Wave is composed of modular components that can be easily customized and reused. At the same time, it is compatible with some of the most popular machine learning libraries, such as JAX and TensorFlow. The accuracy of the simulation results for known configurations is evaluated against the widely used k-Wave toolbox and a cohort of acoustic simulation software. j-Wave is available at https://github.com/ucl-bug/jwave.
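The central idea, differentiating through an acoustic simulation, can be illustrated without j-Wave's actual API (which is not reproduced here). The minimal JAX sketch below time-steps a 1D finite-difference wave equation and takes a gradient of a final-pressure loss with respect to a heterogeneous sound-speed field; all choices of grid, loss, and boundary handling are illustrative.

```python
import jax
import jax.numpy as jnp

def simulate(c, p0, dt=1e-4, dx=1e-2, steps=100):
    """Minimal 1-D finite-difference solver for p_tt = c(x)^2 p_xx
    with periodic boundaries (via jnp.roll)."""
    p_prev, p = p0, p0
    for _ in range(steps):
        lap = (jnp.roll(p, -1) - 2.0 * p + jnp.roll(p, 1)) / dx**2
        p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
        p_prev, p = p, p_next
    return p

x = jnp.linspace(0.0, 1.0, 128)
p0 = jnp.exp(-((x - 0.5) ** 2) / 1e-3)            # initial Gaussian pulse
loss = lambda c: jnp.sum(simulate(c, p0) ** 2)    # energy of the final field
grad_c = jax.grad(loss)(jnp.full(128, 1.0))       # dloss/dc at every grid point
print(grad_c.shape)                               # (128,)
```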
The classical Cox model emerged in 1972, promoting a breakthrough in how patient prognosis is quantified using time-to-event analysis in biomedicine. One of the most useful features of the model is the interpretability of the variables in the analysis. However, this comes at the price of introducing strong assumptions about the functional form of the regression model. To bridge this gap, this paper aims to exploit the interpretability advantages of the classical Cox model in an interval-censored setting through a new lasso neural network, which simultaneously selects the most relevant variables and quantifies non-linear relations between predictors and survival times. The gains of the new method are illustrated empirically in an extensive simulation study that includes examples with both linear and non-linear ground-truth dependencies. We also demonstrate the performance of our strategy in the analysis of physiological, clinical, and accelerometer data from the NHANES 2003-2006 waves, predicting the effect of physical activity on patient survival. Our method outperforms previous results in the literature that used the traditional Cox model.
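The abstract does not specify the architecture or likelihood in detail; as one hedged reading, the PyTorch sketch below pairs a small network whose input layer carries an L1 (lasso) penalty, so irrelevant covariates are driven toward zero weight, with an interval-censored likelihood under an exponential baseline hazard. The baseline choice is an assumption made only to keep the example short.

```python
import torch

class LassoNet(torch.nn.Module):
    """Sketch: feed-forward network with an L1-penalized input layer."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.input = torch.nn.Linear(n_features, hidden)
        self.body = torch.nn.Sequential(torch.nn.ReLU(), torch.nn.Linear(hidden, 1))

    def forward(self, x):
        return self.body(self.input(x))          # non-linear log-risk score eta(x)

    def l1_penalty(self):
        return self.input.weight.abs().sum()     # drives variable selection

def interval_censored_nll(eta, left, right):
    """P(L < T <= R) = S(L) - S(R) with S(t) = exp(-t * exp(eta)),
    i.e. an exponential baseline hazard (illustrative assumption)."""
    s = lambda t: torch.exp(-t * torch.exp(eta))
    return -torch.log(s(left) - s(right) + 1e-8).mean()

model = LassoNet(n_features=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 20)                         # placeholder covariates
left = torch.rand(128, 1)
right = left + torch.rand(128, 1)                # observed censoring intervals (L, R]
for _ in range(200):
    opt.zero_grad()
    loss = interval_censored_nll(model(x), left, right) + 1e-2 * model.l1_penalty()
    loss.backward()
    opt.step()
```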
Differentiable simulators are an emerging concept with applications in several fields, from reinforcement learning to optimal control. Their distinguishing feature is the ability to compute analytical gradients with respect to the input parameters. Like neural networks, which are constructed by composing several building blocks called layers, a simulation often requires computing the output of an operator that can itself be decomposed into elementary units chained together. While each layer of a neural network represents a specific discrete operation, the same operator can have multiple representations, depending on the discretization employed and the research question that needs to be addressed. Here, we propose a simple design pattern for constructing a library of differentiable operators and discretizations, by representing operators as mappings between families of continuous functions parametrized by finite vectors. We demonstrate the approach on an acoustic optimization problem, in which the Helmholtz equation is discretized using a Fourier spectral method and differentiability is demonstrated by using gradient descent to optimize the speed of sound of an acoustic lens. The proposed framework is open-source and available at https://github.com/ucl-bug/jaxdf.
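The stated design pattern, operators acting on families of continuous functions parametrized by finite vectors, can be sketched in a few lines of JAX. This is not jaxdf's actual API; the Fourier parametrization and all names below are illustrative.

```python
import jax
import jax.numpy as jnp

# A "field" is a finite parameter vector plus a rule that turns the
# parameters into a continuous function of space.
def fourier_field(params, x, length=1.0):
    """Evaluate a real 1-D function from its (cos, sin) Fourier coefficients."""
    n = params.shape[0] // 2
    k = jnp.arange(1, n + 1) * 2 * jnp.pi / length
    return params[:n] @ jnp.cos(k * x) + params[n:] @ jnp.sin(k * x)

# An operator acts on the *continuous* function: here the second derivative
# is obtained by differentiating the evaluation rule itself with jax.grad,
# independent of which parametrization was chosen.
def second_derivative(get_fn):
    return jax.grad(jax.grad(get_fn, argnums=1), argnums=1)

params = jnp.zeros(8).at[0].set(1.0)     # u(x) = cos(2*pi*x)
u_xx = second_derivative(fourier_field)
print(u_xx(params, 0.25))                # ~ -(2*pi)^2 * cos(pi/2) = 0
```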
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered using a production gas turbine engine system to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
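As a hedged sketch of the two components, with illustrative names and models only: on-board, samples are ranked by the light-weight DT's prediction error so only the most informative ones are transmitted; off-board, anomalous samples are screened out before the DT is refit.

```python
import numpy as np

def prioritise(samples, dt_predict, budget):
    """On-board sketch: score each (x, y) sample by the light-weight DT's
    prediction error and transmit only the most surprising ones, up to the
    link budget. `dt_predict` stands in for the on-board model."""
    errors = np.array([abs(y - dt_predict(x)) for x, y in samples])
    order = np.argsort(errors)[::-1]              # largest errors first
    return [samples[i] for i in order[:budget]]

def robust_update(theta, batch, k=3.0):
    """Off-board sketch: flag anomalous samples with a robust z-score on
    the residuals, then refit the DT's parameters on the inliers only."""
    X = np.array([x for x, _ in batch])
    y = np.array([y for _, y in batch])
    r = y - X @ theta
    mad = np.median(np.abs(r - np.median(r))) + 1e-12
    ok = np.abs(r - np.median(r)) / (1.4826 * mad) < k
    return np.linalg.lstsq(X[ok], y[ok], rcond=None)[0]
```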
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
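A stripped-down version of the frozen-state idea can be written in a few lines: hold the slow state fixed for T fast steps, solve the resulting finite-horizon lower-level MDP by backward induction, and run upper-level value iteration across slow transitions. The sketch below uses random tabular dynamics and assumes, for brevity, that slow transitions are action-independent; the paper's algorithm and regret analysis are more general.

```python
import numpy as np

n_f, n_s, n_a, T, gamma = 5, 3, 2, 10, 0.95
rng = np.random.default_rng(0)
P_f = rng.dirichlet(np.ones(n_f), size=(n_s, n_a, n_f))  # fast dynamics P_f[s, a, f, :]
R = rng.random((n_s, n_a, n_f))                          # reward R[s, a, f]
P_s = rng.dirichlet(np.ones(n_s), size=n_s)              # slow dynamics P_s[s, :]

def lower_level(s, terminal):
    """Finite-horizon DP over fast states with the slow state frozen at s.
    `terminal` is the continuation value handed down by the upper level."""
    V = terminal
    for _ in range(T):
        V = (R[s] + gamma * P_f[s] @ V).max(axis=0)      # Bellman backup over actions
    return V

# Upper level: value iteration on the slow timescale, where one "step" is a
# whole T-step frozen episode followed by a slow transition. The composite
# operator contracts with modulus gamma**T, so it converges quickly.
W = np.zeros((n_s, n_f))
for _ in range(300):
    W = np.stack([lower_level(s, P_s[s] @ W) for s in range(n_s)])
```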
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Machine learning is the dominant approach to artificial intelligence, through which computers learn from data and experience. In the framework of supervised learning, for a computer to learn from data accurately and efficiently, some auxiliary information about the data distribution and target function should be provided to it through the learning model. This notion of auxiliary information relates to the concept of regularization in statistical learning theory. A common feature among real-world datasets is that data domains are multiscale and target functions are well-behaved and smooth. In this paper, we propose a learning model that exploits this multiscale data structure and discuss its statistical and computational benefits. The hierarchical learning model is inspired by the logical and progressive easy-to-hard learning mechanism of human beings and has interpretable levels. The model apportions computational resources according to the complexity of data instances and target functions. This property can have multiple benefits, including higher inference speed and computational savings in training a model for many users or when training is interrupted. We provide a statistical analysis of the learning mechanism using multiscale entropies and show that it can yield significantly stronger guarantees than uniform convergence bounds.
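The abstract leaves the architecture open; as one loose illustration of apportioning computation by instance complexity, the sketch below implements an easy-to-hard cascade in which a cheap first level answers confident (easy) cases and defers ambiguous ones to a more expensive level. All names, models, and thresholds are illustrative, not the paper's construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

class TwoLevelCascade:
    """Easy-to-hard cascade: a cheap coarse level with an early exit,
    backed by a more expensive fine level for hard instances."""
    def __init__(self, tau=0.9):
        self.coarse = LogisticRegression(max_iter=1000)
        self.fine = RandomForestClassifier(n_estimators=200)
        self.tau = tau                        # confidence needed to exit early

    def fit(self, X, y):
        self.coarse.fit(X, y)
        self.fine.fit(X, y)
        return self

    def predict(self, X):
        proba = self.coarse.predict_proba(X)
        easy = proba.max(axis=1) >= self.tau  # early exit for confident inputs
        labels = self.coarse.classes_[proba.argmax(axis=1)]
        if (~easy).any():                     # spend more compute on hard inputs
            labels[~easy] = self.fine.predict(X[~easy])
        return labels

# Usage (illustrative): clf = TwoLevelCascade().fit(X_train, y_train)
```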